AlphaZero vs Stockfish 8: A Landmark Battle in Computer Chess
Chess has long been regarded as one of the most intellectually challenging games in the world. It requires a deep understanding of strategy and the ability to anticipate and react to an opponent's moves. For years, chess has been dominated by human players who have honed their skills through years of practice and experience. However, in recent years, artificial intelligence has emerged as a formidable opponent on the chessboard. In 2017, Google's artificial intelligence company DeepMind introduced AlphaZero, an AI system that could teach itself how to play chess, shogi, and Go.
Acquisition of Chess Knowledge in AlphaZero
Thomas McGrath, Andrei Kapishnikov, Nenad Tomašev, Adam Pearce, Demis Hassabis, Been Kim, Ulrich Paquet, Vladimir Kramnik
What is learned by sophisticated neural network agents such as AlphaZero? This question is of both scientific and practical interest. If the representations of strong neural networks bear no resemblance to human concepts, our ability to understand faithful explanations of their decisions will be restricted, ultimately limiting what we can achieve with neural network interpretability. In this work we provide evidence that human knowledge is acquired by the AlphaZero neural network as it trains on the game of chess. By probing for a broad range of human chess concepts we show when and where these concepts are represented in the AlphaZero network. We also provide a behavioural analysis focusing on opening play, including qualitative analysis from chess Grandmaster Vladimir Kramnik. Finally, we carry out a preliminary investigation looking at the low-level details of AlphaZero's representations, and make the resulting behavioural and representational analyses available online.
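The abstract above describes "probing" for human chess concepts in AlphaZero's network: a simple model is fit from a layer's activations to a human-interpretable concept, and high held-out accuracy suggests the concept is represented there. The following is a minimal sketch of that general idea, not the authors' exact method; the activations and the concept (e.g. material balance) are random stand-ins, and the probe is a closed-form ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n_positions, n_units = 1000, 64

# Stand-in for activations of one network layer over many positions.
activations = rng.normal(size=(n_positions, n_units))

# Simulate a concept score that is (mostly) linearly encoded
# in the activations, plus a little noise.
true_w = rng.normal(size=n_units)
concept = activations @ true_w + 0.1 * rng.normal(size=n_positions)

# Train/test split.
train, test = slice(0, 800), slice(800, None)

# Ridge-regression probe, closed form: w = (X^T X + lam*I)^-1 X^T y
lam = 1.0
X, y = activations[train], concept[train]
w = np.linalg.solve(X.T @ X + lam * np.eye(n_units), X.T @ y)

# Held-out R^2: how well the concept is linearly decodable.
pred = activations[test] @ w
ss_res = np.sum((concept[test] - pred) ** 2)
ss_tot = np.sum((concept[test] - concept[test].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"held-out R^2: {r2:.3f}")
```

In the real analysis the activations come from AlphaZero checkpoints at different training stages, so the probe's accuracy traces *when* during training a concept emerges as well as *where* in the network it lives.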
On the Causes and Consequences of Deviations from Rational Behavior
Dainis Zegners, Uwe Sunde, Anthony Strittmatter
Traditionally, economists have focused on a rational decision maker - the "homo economicus" - to model human behavior. The observation of various deviations of behavior from the benchmark of optimizing rational decision making has motivated an entire field, behavioral economics. Research in this field has identified a plethora of different, partly distinct and partly interacting, behavioral biases, which are related to cognitive limitations, stress, limited memory, preference anomalies, and social interactions, among others. These biases are typically established by comparing actual behavior against a theoretical benchmark, often in simplistic, unrealistic, or abstract settings that are unfamiliar to the decision makers. Field evidence for behavioral biases among professionals is still scarce, mostly because of the difficulty to establish a rational benchmark in complex real-world settings. Consequently, most contributions focus on documenting a behavioral deviation in one particular dimension. This makes it often difficult to compare the behavioral biases documented in the literature. Moreover, deviations from rational behavior are usually seen as being related to suboptimal performance. However, this connotation often rests on a priori reasoning or value judgments because it is typically even harder or impossible to identify the consequences of deviations from the rational benchmark than the deviations themselves.
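Chess offers one of the rare real-world settings where the rational benchmark the abstract calls for can actually be computed: a strong engine's preferred move. A common way to quantify a player's deviation from that benchmark (not necessarily the authors' exact measure) is the per-move "centipawn loss", the gap between the engine's evaluation of its best move and of the move actually played. The evaluations below are hypothetical stand-in numbers.

```python
def centipawn_loss(best_eval_cp: int, played_eval_cp: int) -> int:
    """Evaluation cost (in centipawns) of playing a non-optimal move.

    Both evaluations are from the mover's perspective; the loss is
    clamped at zero, so a move matching the engine's choice costs nothing.
    """
    return max(0, best_eval_cp - played_eval_cp)

# Hypothetical game: (engine eval of best move, eval of move played).
moves = [(35, 35), (20, -15), (50, 42), (10, 10), (-5, -60)]

losses = [centipawn_loss(best, played) for best, played in moves]
avg_loss = sum(losses) / len(losses)
print(losses)    # [0, 35, 8, 0, 55]
print(avg_loss)  # 19.6
```

Averaged over many moves and players, a statistic like this makes deviations comparable across individuals and situations, which is exactly what the abstract argues is hard to do in most field settings.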
Google AI Achieves "Alien" Superhuman Mastery of Chess and Go in Mere Hours - The New Stack
News of a specialized computer program beating human champions at games like chess and Go may no longer surprise people as much as it did when Deep Blue beat world chess champion Garry Kasparov back in 1997, or when Google DeepMind's AlphaGo beat Lee Sedol in a stunning upset in 2016. But the goal for AI researchers has always been to develop an artificial general intelligence (AGI) capable of not merely mastering games, but of learning and solving all kinds of problems in a general way, as humans do. It seems that DeepMind, Google's subsidiary, has once again gotten one step closer to this goal with AlphaZero, its latest AI development. Its recently published pre-print research outlines how AlphaZero handily beat one of the world's top chess engines after teaching itself the game in four hours, and reached a "superhuman" level of play within a mere 24 hours not only in chess but in two other types of board games. The most remarkable thing about this latest evolution is that, in contrast to finely hand-tuned game-playing programs, the only input AlphaZero received was the basic rules of each game.
AlphaZero Annihilates World's Best Chess Bot After Just Four Hours of Practicing
A few months after demonstrating its dominance at the game of Go, DeepMind's AlphaZero AI has trounced the world's top-ranked chess engine, and it did so with no prior knowledge of the game beyond its rules and after just four hours of self-training. AlphaZero is now the most dominant chess-playing entity on the planet. In a one-on-one tournament against Stockfish 8, the reigning computer chess champion, the DeepMind-built system did not lose a single game, winning or drawing all of the 100 matches played. AlphaZero is a modified version of AlphaGo Zero, the AI that recently won all 100 games of Go against its predecessor, AlphaGo. In addition to mastering chess, AlphaZero also developed a proficiency for shogi, a Japanese board game similar to chess.
Google's 'superhuman' DeepMind AI claims chess crown
Google says its AlphaZero artificial intelligence program has triumphed at chess against world-leading specialist software within hours of teaching itself the game from scratch. The firm's DeepMind division says that it played 100 games against Stockfish 8, and won or drew all of them. The research has yet to be peer reviewed, but experts already suggest the achievement will strengthen the firm's position in a competitive sector. "From a scientific point of view, it's the latest in a series of dazzling results that DeepMind has produced," the University of Oxford's Prof Michael Wooldridge told the BBC.